This paper presents our solutions for the MediaEval 2022 task on DisasterMM. The task is composed of two subtasks, namely (i) Relevance Classification of Twitter Posts (RCTP), and (ii) Location Extraction from Twitter Texts (LETT). The RCTP subtask aims at differentiating flood-related from non-relevant social media posts, while LETT is a Named Entity Recognition (NER) task aiming at the extraction of location information from text. For RCTP, we proposed four different solutions based on BERT, RoBERTa, DistilBERT, and ALBERT, obtaining F1-scores of 0.7934, 0.7970, 0.7613, and 0.7924, respectively. For LETT, we used three models, namely BERT, RoBERTa, and DistilBERT, obtaining F1-scores of 0.6256, 0.6744, and 0.6723, respectively.
Deep learning models require an enormous amount of data for training. However, there has recently been a shift in machine learning from model-centric to data-centric approaches. In data-centric approaches, the focus is on refining and improving the quality of the data to improve the learning performance of the models rather than redesigning model architectures. In this paper, we propose CLIP (Curriculum Learning with Iterative data Pruning). CLIP combines two data-centric approaches, curriculum learning and dataset pruning, to improve model accuracy and convergence speed. The proposed scheme applies loss-aware dataset pruning to iteratively remove the least significant samples, progressively reducing the size of the effective dataset during curriculum learning. Extensive experiments performed on crowd density estimation models validate the notion behind combining the two approaches by reducing convergence time and improving generalization. To our knowledge, the idea of data pruning as an embedded process in curriculum learning is novel.
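The iterative pruning step can be illustrated with a minimal sketch (the function name, the pruning fraction, and the assumption that the lowest-loss samples are the "least significant" ones are illustrative, not the authors' implementation): after a curriculum stage, per-sample losses are ranked and the lowest-loss fraction is dropped from the effective training set.

```python
import numpy as np

def prune_least_significant(losses, prune_frac):
    """Loss-aware pruning sketch: keep the indices of the highest-loss
    (assumed most informative) samples; drop the lowest-loss fraction."""
    n_keep = max(1, int(len(losses) * (1 - prune_frac)))
    order = np.argsort(losses)[::-1]  # indices sorted by descending loss
    return np.sort(order[:n_keep])   # kept indices, in dataset order

# toy example: 10 samples, prune the 30% with the smallest loss
losses = np.array([0.9, 0.1, 0.5, 0.05, 0.7, 0.2, 0.8, 0.3, 0.6, 0.4])
kept = prune_least_significant(losses, 0.3)  # → [0 2 4 6 7 8 9]
```

In a curriculum loop, the kept indices would define the effective dataset for the next, harder stage, so the training set shrinks as the model converges.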
Density estimation is one of the most widely used methods for crowd counting, in which a deep learning model learns from head-annotated crowd images to estimate crowd density in unseen images. Typically, the learning performance of the model is highly impacted by the accuracy of the annotations, and inaccurate annotations may lead to localization and counting errors during prediction. A significant body of work exists on crowd counting using perfectly labelled datasets, but none of it explores the impact of annotation errors on model accuracy. In this paper, we investigate the impact of imperfect labels (both noisy and missing labels) on crowd counting accuracy. We propose a system that automatically generates imperfect labels using a deep learning model (called the annotator), which are then used to train a new crowd counting model (the target model). Our analysis on two crowd counting models and two benchmark datasets shows that the proposed scheme achieves accuracy close to that of a model trained with perfect labels, demonstrating the robustness of crowd counting models to annotation errors.
The rapid outbreak of the COVID-19 pandemic prompted scientists and researchers to prepare the world for future disasters. During the pandemic, global healthcare authorities stressed the importance of disinfecting objects and surfaces. To implement efficient and safe disinfection services during the pandemic, robots have been utilized for indoor assets. In this paper, we envision the use of drones for the disinfection of outdoor assets in hospitals and other facilities. Such heterogeneous assets may have different service demands (e.g., service time, quantity of disinfectant material), whereas drones typically have limited capacity (i.e., travel time, disinfectant carrying capacity). To serve all facility assets efficiently, the drone-to-asset allocation and the drone travel routes must be optimized. We formulate this as a capacitated vehicle routing problem (CVRP) to find an optimal route for each drone such that the total service time is minimized while each drone meets the demands of every asset allocated to it. The problem is solved using mixed integer programming (MIP). As CVRP is NP-hard, we also propose a lightweight heuristic that achieves sub-optimal performance while reducing the time complexity for instances involving a large number of assets.
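The general shape of such a lightweight heuristic can be sketched as a greedy, capacity-aware route builder (a minimal illustration under assumed names and a nearest-neighbour rule; the paper's actual heuristic may differ):

```python
import math

def greedy_cvrp(depot, assets, demands, capacity):
    """Greedy nearest-neighbour CVRP sketch: each route visits the closest
    unserved asset that still fits the drone's remaining capacity, starting a
    new route (i.e., a refill trip) when nothing else fits."""
    if any(d > capacity for d in demands):
        raise ValueError("an asset demand exceeds the drone capacity")
    unserved = set(range(len(assets)))
    routes = []
    while unserved:
        route, pos, remaining = [], depot, capacity
        while True:
            feasible = [i for i in unserved if demands[i] <= remaining]
            if not feasible:
                break
            nxt = min(feasible, key=lambda i: math.dist(pos, assets[i]))
            route.append(nxt)
            remaining -= demands[nxt]
            pos = assets[nxt]
            unserved.discard(nxt)
        routes.append(route)
    return routes

# toy instance: three assets, drone capacity 4
routes = greedy_cvrp((0, 0), [(1, 0), (2, 0), (0, 3)], [2, 2, 3], 4)
# → [[0, 1], [2]]: one trip serves the two nearby assets, a second serves the third
```

An exact MIP formulation would instead enumerate arc variables with capacity and subtour-elimination constraints; the greedy builder trades optimality for near-linear construction time.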
The increase in the number of unmanned aerial vehicles (UAVs), a.k.a. drones, poses several threats to public privacy, critical infrastructure, and cybersecurity. Hence, detecting unauthorized drones is a significant problem that has received attention in the last few years. In this paper, we present our experimental work on three drone detection methods (acoustic detection, radio frequency (RF) detection, and visual detection) to evaluate their efficacy in both indoor and outdoor environments. Owing to the limitations of these schemes, we present a novel encryption-based drone detection scheme that uses a two-stage verification of the drone's received signal strength indicator (RSSI) and an encryption key generated from the drone's position coordinates to reliably detect an unauthorized drone in the presence of authorized drones.
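A minimal sketch of such a two-stage check follows; all names, the RSSI band, and the HMAC-based key derivation are illustrative assumptions standing in for the paper's actual protocol, which is not detailed in the abstract:

```python
import hmac
import hashlib

def expected_key(secret, position):
    """Stage 2 (assumed scheme): key an authorized drone would derive from
    its position coordinates using a shared secret."""
    msg = f"{position[0]:.5f},{position[1]:.5f}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_drone(rssi, reported_key, secret, position,
                 rssi_min=-80, rssi_max=-30):
    """Two-stage verification sketch: (1) RSSI lies within the band expected
    for the claimed position, (2) the position-derived key matches."""
    if not (rssi_min <= rssi <= rssi_max):
        return False  # stage 1 failed: signal strength inconsistent
    return hmac.compare_digest(reported_key, expected_key(secret, position))

secret = b"shared-facility-secret"
pos = (24.45412, 54.37728)
key = expected_key(secret, pos)
assert verify_drone(-50, key, secret, pos)        # authorized drone passes
assert not verify_drone(-90, key, secret, pos)    # fails stage 1 (RSSI)
assert not verify_drone(-50, "bogus", secret, pos)  # fails stage 2 (key)
```

The point of combining the two stages is that an intruder must simultaneously spoof a plausible signal strength and hold the position-bound key, which an unauthorized drone cannot derive.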
Video surveillance using drones is both convenient and efficient due to the ease of deployment and the unobstructed movement of drones in many scenarios. An interesting application of drone-based video surveillance is estimating crowd densities (of both pedestrians and vehicles) in public places. Deep learning using convolutional neural networks (CNNs) is employed for automatic crowd counting and density estimation from images and videos. However, the performance and accuracy of such models typically depend on the model architecture: deeper CNN models improve accuracy at the cost of increased inference time. In this paper, we propose a novel crowd density estimation model for drones (DroneNet) using Self-Organized Operational Neural Networks (Self-ONNs). Self-ONNs provide efficient learning capabilities with lower computational complexity compared to CNN-based models. We tested our algorithm on two drone-view public datasets. Our evaluation shows that the proposed DroneNet outperforms an equivalent CNN-based model.
Crowd counting is an effective tool for situational awareness in public places. Automatic crowd counting using images and videos is an interesting yet challenging problem that has attracted significant attention in computer vision. Over the past few years, various deep learning methods have been developed to achieve state-of-the-art performance. Over time, these methods have evolved in many aspects, such as model architecture, input pipeline, learning paradigm, computational complexity, and accuracy improvements. In this paper, we present a systematic and comprehensive review of the most significant contributions in the field of crowd counting. Although a few surveys on this topic exist, ours is up to date and differs in several respects. First, it provides a more meaningful categorization of the most significant contributions by model architecture, learning method (i.e., loss function), and evaluation method (i.e., evaluation metrics). We selected prominent and distinct works and excluded similar ones. We also categorize well-known crowd counting models by benchmark datasets. We believe this survey can be a good resource for novice researchers to understand the progressive development and contributions over time as well as the current state of the art.
This paper focuses on an important environmental challenge, namely water quality, by analyzing the potential of social media as an immediate source of feedback. The main goal of the work is the automatic analysis and retrieval of social media posts relevant to water quality, with particular attention to posts describing different aspects of water quality, such as watercolor, smell, taste, and related illnesses. To this end, we propose a novel framework incorporating different preprocessing, data augmentation, and classification techniques. In total, three different neural network (NN) architectures, namely (i) Bidirectional Encoder Representations from Transformers (BERT), (ii) Robustly Optimized BERT Pre-training Approach (XLM-RoBERTa), and (iii) a custom Long Short-Term Memory (LSTM) model, are employed in a merit-based fusion scheme. For the merit-based weight assignment to the models, several optimization and search techniques are compared, including Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Brute Force (BF), Nelder-Mead, and Powell's optimization methods. We also provide an evaluation of the individual models, where the highest F1-score of 0.81 is obtained with the BERT model. In merit-based fusion, overall better results are obtained with BF, achieving an F1-score of 0.852. We also provide a comparison against existing methods, where significant improvements are observed for our proposed solutions. We believe such a rigorous analysis of this relatively new topic will provide a baseline for future research.
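The BF weight search can be illustrated with a toy sketch (function names, the 0.5 decision threshold, and the grid step are assumptions; the real search would run on validation-set probabilities from the three models):

```python
import itertools
import numpy as np

def fused_f1(weights, probs, labels, thresh=0.5):
    """F1-score of a weighted late fusion of per-model class probabilities."""
    fused = sum(w * p for w, p in zip(weights, probs))
    pred = (fused >= thresh).astype(int)
    tp = int(((pred == 1) & (labels == 1)).sum())
    fp = int(((pred == 1) & (labels == 0)).sum())
    fn = int(((pred == 0) & (labels == 1)).sum())
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def brute_force_weights(probs, labels, steps=11):
    """Exhaustively search weight triples summing to 1 (the BF variant)."""
    grid = np.linspace(0.0, 1.0, steps)
    best_f1, best_w = -1.0, None
    for w1, w2 in itertools.product(grid, grid):
        w3 = 1.0 - w1 - w2
        if w3 < -1e-9:
            continue  # weights must form a convex combination
        w = (w1, w2, max(w3, 0.0))
        f1 = fused_f1(w, probs, labels)
        if f1 > best_f1:
            best_f1, best_w = f1, w
    return best_w, best_f1

# toy validation set: model 1 is reliable, models 2 and 3 are not
labels = np.array([1, 0, 1, 0])
probs = [np.array([0.9, 0.1, 0.8, 0.2]),   # good model
         np.array([0.4, 0.6, 0.3, 0.7]),   # anti-correlated model
         np.array([0.5, 0.5, 0.5, 0.5])]   # uninformative model
best_w, best_f1 = brute_force_weights(probs, labels)
```

PSO, GA, Nelder-Mead, and Powell explore the same weight simplex with fewer evaluations, which matters once the grid is fine or there are many models.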
Device mobility in dense Wi-Fi networks poses several challenges. Two well-known problems associated with device mobility are handover prediction and access point (AP) selection. Due to the complexity of the radio environment, analytical models may fail to characterize the wireless channel, which makes these problems very difficult to solve. Recently, cognitive network architectures employing sophisticated learning techniques have been increasingly applied to such problems. In this paper, we propose data-driven machine learning (ML) schemes to solve these problems efficiently in WLAN networks. The proposed schemes are evaluated and the results are compared with traditional approaches to the aforementioned problems. The results show significant improvements in network performance from applying the proposed schemes. Specifically, the proposed handover prediction scheme outperforms the traditional methods, i.e., the RSS method and the traveled-distance method, by reducing the number of unnecessary handovers by 60% and 50%, respectively. Similarly, in AP selection, the proposed scheme outperforms the SSF and LLF algorithms by achieving throughput gains of up to 9.2% and 8%, respectively.
Objective: Despite numerous studies proposed for audio restoration in the literature, most of them focus on an isolated restoration problem such as denoising or dereverberation, ignoring other artifacts. Moreover, assuming a noisy or reverberant environment with a limited number of fixed signal-to-distortion ratio (SDR) levels is a common practice. However, real-world audio is often corrupted by a blend of artifacts such as reverberation, sensor noise, and background audio mixtures with varying types, severities, and durations. In this study, we propose a novel approach for blind restoration of real-world audio signals by Operational Generative Adversarial Networks (Op-GANs) with temporal and spectral objective metrics to enhance the quality of the restored audio signal regardless of the type and severity of each artifact corrupting it. Methods: 1D Op-GANs with a generative neuron model are optimized for blind restoration of any corrupted audio signal. Results: The proposed approach has been evaluated extensively over the benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets, corrupted with a random blend of artifacts, each with a random severity, to mimic real-world audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB are achieved, respectively, which are substantial when compared with the baseline methods. Significance: This is a pioneering study in blind audio restoration with the unique capability of direct (time-domain) restoration of real-world audio whilst achieving an unprecedented level of performance for a wide SDR range and many artifact types. Conclusion: 1D Op-GANs can achieve robust and computationally effective real-world audio restoration with significantly improved performance. The source code and the generated real-world audio datasets are shared publicly with the research community in a dedicated GitHub repository.
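For reference, the SDR metric used to report these gains is conventionally the ratio of signal energy to distortion energy in decibels; a minimal sketch:

```python
import numpy as np

def sdr(reference, estimate):
    """Signal-to-distortion ratio in dB: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

# toy example: an estimate scaled to 90% of the reference leaves 1% of the
# energy as distortion, i.e., an SDR of 20 dB
ref = np.array([1.0, -1.0, 1.0, -1.0])
est = 0.9 * ref
value = sdr(ref, est)  # → 20.0 dB
```

An "SDR improvement" is then simply sdr(ref, restored) minus sdr(ref, corrupted), averaged over the test set.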